181 research outputs found

    Action potential energy efficiency varies among neuron types in vertebrates and invertebrates.

    Get PDF
    The initiation and propagation of action potentials (APs) places high demands on the energetic resources of neural tissue. Each AP forces ATP-driven ion pumps to work harder to restore the ionic concentration gradients, thus consuming more energy. Here, we ask whether the ionic currents underlying the AP can be predicted theoretically from the principle of minimum energy consumption. A long-held supposition that APs are energetically wasteful, based on theoretical analysis of the squid giant axon AP, has recently been overturned by studies that measured the currents contributing to the AP in several mammalian neurons. In the single-compartment models studied here, AP energy consumption varies greatly among vertebrate and invertebrate neurons, with several mammalian neuron models using close to the capacitive minimum of energy needed. Strikingly, energy consumption can increase by more than ten-fold simply by changing the overlap of the Na+ and K+ currents during the AP without changing the AP's shape. As a consequence, the height and width of the AP are poor predictors of energy consumption. In the Hodgkin–Huxley model of the squid axon, optimizing the kinetics or number of Na+ and K+ channels can whittle down the number of ATP molecules needed for each AP by a factor of four. In contrast to the squid AP, the temporal profile of the currents underlying the APs of some mammalian neurons is nearly perfectly matched to the optimized properties of ionic conductances so as to minimize the ATP cost.
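
    The capacitive minimum referred to above can be made concrete with a back-of-envelope calculation: depolarizing the membrane by the AP amplitude requires at least Q = C·ΔV of Na+ charge to enter, every Na+ ion must later be pumped back out, and the Na+/K+-ATPase extrudes three Na+ per ATP hydrolysed. The sketch below works through that arithmetic; the membrane capacitance, AP amplitude, and overlap factors are illustrative assumptions, not values taken from the study.

```python
# Back-of-envelope estimate of the ATP cost of one action potential (AP).
# All numbers are illustrative assumptions, not values from the study.

E_CHARGE = 1.602e-19   # C, elementary charge
NA_PER_ATP = 3         # the Na+/K+-ATPase extrudes 3 Na+ per ATP hydrolysed

def atp_per_ap(capacitance_f, ap_amplitude_v, overlap_factor=1.0):
    """ATP needed to pump out the Na+ that entered during one AP.

    capacitance_f   -- membrane capacitance in farads
    ap_amplitude_v  -- AP height in volts
    overlap_factor  -- >= 1; how far the Na+ influx exceeds the capacitive
                       minimum because Na+ and K+ currents overlap in time
    """
    na_charge = capacitance_f * ap_amplitude_v * overlap_factor  # coulombs
    return (na_charge / E_CHARGE) / NA_PER_ATP

# A 100 um^2 patch at 1 uF/cm^2 is ~1 pF; assume a 100 mV AP.
c_membrane = 1e-12
print(f"capacitive minimum : {atp_per_ap(c_membrane, 0.1, 1.0):.2e} ATP")
print(f"4x current overlap : {atp_per_ap(c_membrane, 0.1, 4.0):.2e} ATP")
```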

    Detection and Implications of a Time-reversal breaking state in underdoped Cuprates

    Full text link
    We present general symmetry considerations on how a time-reversal-breaking state may be detected by angle-resolved photoemission using circularly polarized photons, as has been proposed earlier. Results of recent experiments applying this proposal to underdoped cuprates are analysed and found to be consistent in their symmetry and magnitude with a theory of the copper oxides. These results, together with evidence for a quantum critical point and marginal Fermi-liquid properties near optimum doping, suggest that a valid microscopic theory of the phenomena in the cuprates has been found.

    Energy-efficient coding with discrete stochastic events

    Get PDF
    We investigate the energy efficiency of signaling mechanisms that transfer information by means of discrete stochastic events, such as the opening or closing of an ion channel. Using a simple model for the generation of graded electrical signals by sodium and potassium channels, we find optimum numbers of channels that maximize energy efficiency. The optima depend on several factors: the relative magnitudes of the signaling cost (current flow through channels), the fixed cost of maintaining the system, the reliability of the input, additional sources of noise, and the relative costs of upstream and downstream mechanisms. We also analyze how the statistics of input signals influence energy efficiency. We find that energy-efficient signal ensembles favor a bimodal distribution of channel activations and contain only a very small fraction of large inputs when energy is scarce. We conclude that when energy use is a significant constraint, trade-offs between information transfer and energy can strongly influence the number of signaling molecules and synapses used by neurons and the manner in which these mechanisms represent information.
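
    As a rough illustration of how such an optimum over channel numbers can arise, the toy model below treats the graded signal as the expected number of open channels, corrupts it with binomial gating noise plus additional noise, estimates the transmitted bits with a Gaussian-channel approximation, and divides by a signalling-plus-fixed energy cost. The functional form and every constant are assumptions made for illustration; they are not the model or parameters used in the paper.

```python
# Toy model of an energy-efficiency optimum over channel number.
# Functional form and all constants are illustrative assumptions,
# not the model or parameters used in the paper.
import math

def efficiency(n, p0=0.2, gain=0.05, input_var=1.0,
               extra_noise=2.0, cost_per_open=1.0, fixed_cost=100.0):
    """Bits per unit energy for n two-state channels (Gaussian approximation).

    The graded signal is the expected number of open channels; it is
    corrupted by binomial gating noise plus other noise sources.
    """
    signal_var = (n * gain) ** 2 * input_var       # input-driven variance
    noise_var = n * p0 * (1 - p0) + extra_noise    # gating + other noise
    bits = 0.5 * math.log2(1.0 + signal_var / noise_var)
    energy = n * p0 * cost_per_open + fixed_cost   # signalling + fixed cost
    return bits / energy

# Bits/energy rises at first (the fixed cost is being amortized), then falls
# (information grows only logarithmically while cost grows linearly).
n_opt = max(range(1, 2001), key=efficiency)
print("most energy-efficient channel number:", n_opt)
```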

    Balanced excitatory and inhibitory synaptic currents promote efficient coding and metabolic efficiency

    Get PDF
    A balance between excitatory and inhibitory synaptic currents is thought to be important for several aspects of information processing in cortical neurons in vivo, including gain control, bandwidth and receptive field structure. These factors will affect the firing rate of cortical neurons and their reliability, with consequences for their information coding and energy consumption. Yet how balanced synaptic currents contribute to the coding efficiency and energy efficiency of cortical neurons remains unclear. We used single-compartment computational models with stochastic voltage-gated ion channels to determine whether synaptic regimes that produce balanced excitatory and inhibitory currents have specific advantages over other input regimes. Specifically, we compared models with only excitatory synaptic inputs to those with equal excitatory and inhibitory conductances, and to those with stronger inhibitory than excitatory conductances (i.e. approximately balanced synaptic currents). Using these models, we show that balanced synaptic currents evoke fewer spikes per second than excitatory inputs alone or equal excitatory and inhibitory conductances. However, spikes evoked by balanced synaptic inputs are more informative (bits/spike), so that spike trains evoked by all three regimes have similar information rates (bits/s). Consequently, because spikes dominate the energy consumption of our computational models, approximately balanced synaptic currents are also more energy efficient than other synaptic regimes. Thus, by producing fewer, more informative spikes, approximately balanced synaptic currents in cortical neurons can promote both coding efficiency and energy efficiency.
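
    The bookkeeping behind this conclusion is simple: if balanced inputs evoke fewer spikes but more bits per spike, information rates can stay similar while the energy bill, dominated by spiking, falls. The numbers below are invented purely to illustrate that arithmetic and are not outputs of the models described above.

```python
# Arithmetic sketch of the trade-off described above; all numbers are
# made up for illustration and are not results from the paper.

regimes = {
    # name: (spikes per second, bits per spike)
    "excitation only":       (20.0, 1.5),
    "equal E and I":         (15.0, 2.0),
    "balanced E/I currents": (10.0, 3.0),
}

COST_PER_SPIKE = 1.0e8   # ATP per spike (assumed)
FIXED_COST = 5.0e8       # ATP per second at rest (assumed)

for name, (rate, bits_per_spike) in regimes.items():
    info_rate = rate * bits_per_spike                 # bits/s
    energy_rate = rate * COST_PER_SPIKE + FIXED_COST  # ATP/s
    print(f"{name:22s} {info_rate:5.1f} bits/s, "
          f"{info_rate / energy_rate * 1e9:.1f} bits per 10^9 ATP")
```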

    Stochastic Simulations on the Reliability of Action Potential Propagation in Thin Axons

    Get PDF
    It is generally assumed that axons use action potentials (APs) to transmit information fast and reliably to synapses. Yet, the reliability of transmission along fibers below 0.5 μm diameter, such as cortical and cerebellar axons, is unknown. Using detailed models of rodent cortical and squid axons and stochastic simulations, we show how conduction along such thin axons is affected by the probabilistic nature of voltage-gated ion channels (channel noise). We identify four distinct effects that corrupt propagating spike trains in thin axons: spikes were added, deleted, jittered, or split into groups depending upon the temporal pattern of spikes. Additional APs may appear spontaneously; however, APs in general seldom fail (<1%). Spike timing is jittered on the order of milliseconds over distances of millimeters, as conduction velocity fluctuates in two ways. First, variability in the number of Na+ channels opening in the early rising phase of the AP causes propagation speed to fluctuate gradually. Second, a novel mode of AP propagation (stochastic microsaltatory conduction), where the AP leaps ahead toward spontaneously formed clusters of open Na+ channels, produces random discrete jumps in spike time reliability. The combined effect of these two mechanisms depends on the pattern of spikes. Our results show that axonal variability is a general problem and should be taken into account when considering both neural coding and the reliability of synaptic transmission in densely connected cortical networks, where small synapses are typically innervated by thin axons. In contrast, we find that thicker axons above 0.5 μm diameter are reliable.
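
    A minimal sketch of the channel noise underlying these effects is an ensemble of N two-state channels updated with binomial open/close transitions, as below; relative fluctuations in the open-channel count shrink roughly as 1/√N, so thin axons with few channels are intrinsically noisier. The rates and channel counts are illustrative assumptions; the study itself uses detailed multi-state stochastic models of rodent cortical and squid axons.

```python
# Minimal sketch of channel noise: N two-state Na+-like channels updated
# with binomial open/close transitions.  Rates and counts are illustrative
# assumptions; the study uses detailed multi-state axon models.
import random

def open_channel_trace(n_channels, alpha=0.5, beta=2.0, dt=0.01, steps=300):
    """Number of open channels at each step (alpha, beta in 1/ms, dt in ms)."""
    draw = lambda n, p: sum(random.random() < p for _ in range(n))
    n_open = round(n_channels * alpha / (alpha + beta))  # start near equilibrium
    trace = []
    for _ in range(steps):
        n_open += draw(n_channels - n_open, alpha * dt) - draw(n_open, beta * dt)
        trace.append(n_open)
    return trace

# Relative fluctuations shrink roughly as 1/sqrt(N): membranes with few
# channels (thin axons) are intrinsically noisier than those with many.
for n in (100, 10_000):
    t = open_channel_trace(n)
    mean = sum(t) / len(t)
    sd = (sum((x - mean) ** 2 for x in t) / len(t)) ** 0.5
    print(f"N={n:6d}: mean open fraction {mean / n:.3f}, "
          f"relative SD {sd / max(mean, 1):.2f}")
```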

    The effect of cell size and channel density on neuronal information encoding and energy efficiency

    Get PDF
    Identifying the determinants of neuronal energy consumption and their relationship to information coding is critical to understanding neuronal function and evolution. Three of the main determinants are cell size, ion channel density, and stimulus statistics. Here we investigate their impact on neuronal energy consumption and information coding by comparing single-compartment spiking neuron models of different sizes with different densities of stochastic voltage-gated Na+ and K+ channels and different statistics of synaptic inputs. The largest compartments have the highest information rates but the lowest energy efficiency for a given voltage-gated ion channel density, and the highest signaling efficiency (bits spike⁻¹) for a given firing rate. For a given cell size, our models revealed that the ion channel density that maximizes energy efficiency is lower than that maximizing information rate. Low rates of small synaptic inputs improve energy efficiency but the highest information rates occur with higher rates and larger inputs. These relationships produce a Law of Diminishing Returns that penalizes costly excess information coding capacity, promoting the reduction of cell size, channel density, and input stimuli to the minimum possible, suggesting that the trade-off between energy and information has influenced all aspects of neuronal anatomy and physiology.
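
    The gap between the density that maximizes information rate and the density that maximizes energy efficiency follows from any model in which information first rises and then falls with channel density while cost keeps rising. The toy functional forms below are assumptions chosen only to illustrate that generic point; they are not the stochastic compartment models used in the study.

```python
# Toy model (assumed functional forms, not the paper's) illustrating why the
# channel density maximizing energy efficiency sits below the density
# maximizing information rate.
import math

def info_rate(density, peak_density=100.0):
    """Bits/s: rises with density, then falls (channel noise / loading)."""
    return density * math.exp(-density / peak_density)

def energy_rate(density, fixed=50.0, cost_per_channel=1.0):
    """Energy/s: a fixed housekeeping cost plus a per-channel cost."""
    return fixed + cost_per_channel * density

densities = [d / 10 for d in range(1, 3001)]
d_info = max(densities, key=info_rate)
d_eff = max(densities, key=lambda d: info_rate(d) / energy_rate(d))
print(f"density maximizing information rate:  {d_info:.1f}")
print(f"density maximizing energy efficiency: {d_eff:.1f}  (lower)")
```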

    Fly Photoreceptors Demonstrate Energy-Information Trade-Offs in Neural Coding

    Get PDF
    Trade-offs between energy consumption and neuronal performance must shape the design and evolution of nervous systems, but we lack empirical data showing how neuronal energy costs vary according to performance. Using intracellular recordings from the intact retinas of four fly species, Drosophila melanogaster, D. virilis, Calliphora vicina, and Sarcophaga carnaria, we measured the rates at which homologous R1–6 photoreceptors of these species transmit information from the same stimuli and estimated the energy they consumed. In all species, both information rate and energy consumption increase with light intensity. Energy consumption rises from a baseline, the energy required to maintain the dark resting potential. This substantial fixed cost, ∼20% of a photoreceptor's maximum consumption, causes the unit cost of information (ATP molecules hydrolysed per bit) to fall as information rate increases. The highest information rates, achieved at bright daylight levels, differed according to species, from ∼200 bits s⁻¹ in D. melanogaster to ∼1,000 bits s⁻¹ in S. carnaria. Comparing species, the fixed cost, the total cost of signalling, and the unit cost (cost per bit) all increase with a photoreceptor's highest information rate to make information more expensive in higher performance cells. This law of diminishing returns promotes the evolution of economical structures by severely penalising overcapacity. Similar relationships could influence the function and design of many neurons because they are subject to similar biophysical constraints on information throughput.
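
    The falling unit cost follows directly from the fixed cost: if roughly 20% of maximum consumption is spent simply holding the dark resting potential, dividing total consumption by information rate necessarily yields a cheaper bit at higher rates. The small worked example below assumes, purely for illustration, that the signalling cost rises linearly with information rate.

```python
# Worked illustration (assumed numbers and a linear signalling cost) of why
# a fixed resting cost makes the unit cost of information fall with rate.
FIXED = 0.2     # fixed cost as a fraction of maximum consumption (from abstract)
MAXIMUM = 1.0   # total consumption at the highest information rate (normalised)

def cost_per_bit(info_rate, max_rate):
    """Relative cost per bit, with signalling cost rising linearly to maximum."""
    total = FIXED + (MAXIMUM - FIXED) * (info_rate / max_rate)
    return total / info_rate

for rate in (50, 200, 500, 1000):
    print(f"{rate:5d} bits/s -> relative cost per bit "
          f"{cost_per_bit(rate, 1000):.4f}")
```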

    Turbulent Linewidths as a Diagnostic of Self-Gravity in Protostellar Discs

    Full text link
    We use smoothed particle hydrodynamics simulations of massive protostellar discs to investigate the predicted broadening of molecular lines from discs in which self-gravity is the dominant source of angular momentum transport. The simulations include radiative transfer, and span a range of disc-to-star mass ratios between 0.25 and 1.5. Subtracting off the mean azimuthal flow velocity, we compute the distribution of the in-plane and perpendicular peculiar velocity due to large-scale structure and turbulence induced by self-gravity. For the lower mass discs, we show that the characteristic peculiar velocities scale with the square root of the effective turbulent viscosity parameter, as expected from local turbulent-disc theory. The derived velocities are anisotropic, with substantially larger in-plane than perpendicular values. As the disc mass is increased, the validity of the locally determined turbulence approximation breaks down, and this is accompanied by anomalously large in-plane broadening. There is also a high variance due to the importance of low-m spiral modes. For low-mass discs, the magnitude of in-plane broadening is, to leading order, equal to the predictions from local disc theory and cannot constrain the source of turbulence. However, combining our results with prior evaluations of turbulent broadening expected in discs where the magnetorotational instability (MRI) is active, we argue that self-gravity may be distinguishable from the MRI in these systems if it is possible to measure the anisotropy of the peculiar velocity field with disc inclination. Furthermore, for large mass discs, the dominant contribution of large-scale modes is a distinguishing characteristic of self-gravitating turbulence versus MRI-driven turbulence.
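
    The square-root scaling quoted above is the standard local-disc estimate: the turbulent Reynolds stress is identified with the effective α viscosity, and the characteristic velocity fluctuation follows from equating the two. A sketch of that textbook step (not a derivation specific to this paper):

```latex
% Local turbulent-disc estimate behind the sqrt(alpha) scaling:
% the Reynolds stress of the turbulence supplies the effective viscosity.
\begin{equation}
  T_{r\phi} \;\simeq\; \rho\,\langle \delta v_r\, \delta v_\phi \rangle
  \;\equiv\; \alpha\, \rho\, c_s^{2}
  \qquad\Longrightarrow\qquad
  \delta v \;\sim\; \sqrt{\alpha}\, c_s .
\end{equation}
```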

    Spin-Singlet Quantum Hall States and Jack Polynomials with a Prescribed Symmetry

    Full text link
    We show that a large class of bosonic spin-singlet Fractional Quantum Hall model wave-functions and their quasi-hole excitations can be written in terms of Jack polynomials with a prescribed symmetry. Our approach describes new spin-singlet quantum Hall states at filling fraction ν = 2k/(2r−1) and generalizes the (k,r) spin-polarized Jack polynomial states. The NASS and Halperin spin-singlet states emerge as specific cases of our construction. The polynomials express many-body states which contain configurations obtained from a root partition through a generalized squeezing procedure involving spin and orbital degrees of freedom. The corresponding generalized Pauli principle for root partitions is obtained, allowing for counting of the quasihole states. We also extract the central charge and quasihole scaling dimension, and propose a conjecture for the underlying CFT of the (k, r) spin-singlet Jack states.
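
    Two concrete instances of the filling-fraction formula are shown below; the abstract states only that the Halperin and NASS spin-singlet states arise as special cases, so matching them to specific (k, r) values here is a hedged reading of the standard literature rather than a statement from the paper.

```latex
% Example fillings from nu = 2k/(2r - 1); the identifications in the text
% annotations are assumptions based on the standard literature.
\begin{align}
  (k,r) = (1,2) &:\quad \nu = \tfrac{2}{3}
    && \text{(plausibly the bosonic Halperin $(2,2,1)$ spin-singlet state),} \\
  (k,r) = (2,2) &:\quad \nu = \tfrac{4}{3}
    && \text{(plausibly the $k=2$ member of the bosonic NASS series $\nu = 2k/3$).}
\end{align}
```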

    Fermi-sea-like correlations in a partially filled Landau level

    Full text link
    The pair distribution function and the static structure factor are computed for composite fermions. Clear and robust evidence for a 2k_F structure is seen in a range of filling factors in the vicinity of the half-filled Landau level. Surprisingly, it is found that filled Landau levels of composite fermions, i.e. incompressible FQHE states, bear a stronger resemblance to a Fermi sea than do filled Landau levels of electrons.
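
    As a reference point for the "2k_F structure" mentioned above, the pair distribution function of an ordinary non-interacting, spinless 2D Fermi sea is g(r) = 1 − [2J1(k_F r)/(k_F r)]², whose real-space oscillations carry wavevector 2k_F. The snippet below evaluates that textbook benchmark; it is not the composite-fermion calculation reported in the paper.

```python
# Textbook benchmark for "2k_F structure": the pair distribution function of
# a non-interacting, spinless 2D Fermi sea, g(r) = 1 - [2 J1(k_F r)/(k_F r)]^2,
# whose oscillations carry wavevector 2 k_F.  This is the Fermi-sea reference
# point, not the composite-fermion calculation reported in the paper.
import numpy as np
from scipy.special import j1

def g_fermi_sea(r, k_f=1.0):
    """Pair distribution function of a 2D spinless free Fermi gas."""
    x = np.maximum(k_f * np.asarray(r, dtype=float), 1e-12)  # avoid 0/0 at r = 0
    return 1.0 - (2.0 * j1(x) / x) ** 2

r = np.linspace(0.0, 30.0, 3000)
g = g_fermi_sea(r)
peaks = r[1:-1][(g[1:-1] > g[:-2]) & (g[1:-1] > g[2:])]
print("spacing of successive maxima of g(r):", np.diff(peaks))
# spacing ~ pi / k_F, i.e. real-space oscillations at wavevector 2 k_F
```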